Journal article

High intrinsic dimensionality facilitates adversarial attack: Theoretical evidence

L Amsaleg, J Bailey, A Barbe, SM Erfani, T Furon, ME Houle, M Radovanovic, XV Nguyen

IEEE Transactions on Information Forensics and Security | IEEE | Published: 2021

Abstract

Machine learning systems are vulnerable to adversarial attack. By applying a small, carefully designed perturbation to the input object, an attacker can trick a classifier into making an incorrect prediction. This phenomenon has drawn wide interest, with many attempts made to explain it. However, a complete understanding is yet to emerge. In this paper we adopt a slightly different perspective, still relevant to classification. We consider retrieval, where the output is the set of objects most similar to a user-supplied query object, corresponding to the set of k-nearest neighbors. We investigate the effect of adversarial perturbation on the ranking of objects with respect to a query. Through theoret…
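The setting the abstract describes can be illustrated with a toy sketch: rank a dataset's objects by distance to a query, then apply a small perturbation to one object and observe that the set of k-nearest neighbors changes. The points, query, and perturbation size below are arbitrary values chosen for illustration; they are not from the paper.

```python
import math

def euclid(a, b):
    """Euclidean distance between two points given as tuples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_ranking(points, query, k):
    """Indices of the k points closest to the query, nearest first."""
    order = sorted(range(len(points)), key=lambda i: euclid(points[i], query))
    return order[:k]

# Toy 2-D dataset and query (hypothetical values).
points = [(0.0, 0.0), (1.0, 1.0), (2.0, 3.5), (0.0, 3.0), (2.6, 2.6)]
query = (2.0, 2.0)

before = knn_ranking(points, query, k=2)

# Adversarially nudge object 2 a small step toward the query,
# just far enough to displace object 1 from the top-2.
eps = 0.2
i = 2
norm = euclid(points[i], query)
points[i] = tuple(p + eps * (q - p) / norm
                  for p, q in zip(points[i], query))

after = knn_ranking(points, query, k=2)
print(before, after)  # → [4, 1] [4, 2]
```

A perturbation of size 0.2 is enough here because object 2 starts only slightly farther from the query than object 1; the paper's theoretical analysis concerns how the perturbation needed for such rank changes shrinks as intrinsic dimensionality grows.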



Grants

Awarded by Australian Research Council


Funding Acknowledgements

The work of Laurent Amsaleg was supported in part by the European CHIST-ERA ID_IOT project. The work of James Bailey and Sarah M. Erfani was supported in part by the Australian Research Council under Grant DP140101969. The work of Teddy Furon was supported by the ANR-AID Chaire SAIDA. The work of Michael E. Houle was supported by the JSPS Kakenhi Kiban (B) Research under Grant 18H03296. The work of Miloš Radovanović was supported by the Serbian National Project under Grant OI174023. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Yao Liu.